45 "Artificial Intelligence Processor IP" IP

1
ARC NPX6 NPU for Neural Processing
Synopsys ARC® NPX Neural Processor IP family provides a high-performance, power- and area-efficient IP solution for a range of applications requiring AI-enabled SoCs. The ARC NPX6 NPU IP is designed f...

2
ARC NPX6FS Neural Processors for Functional Safety
The ASIL B or D Ready Synopsys ARC NPX6FS NPUs enable automotive system-on-chip (SoC) designers to accelerate ISO 26262 certification of Advanced Driver Assistance Systems (ADAS) and autonomous vehicl...

3
Arm Ethos-N57 NPU
ML Inference Processor with Balanced Efficiency and Performance

Optimized for the most cost- and power-sensitive designs, Ethos-N57 delivers premium AI experiences in mainstream phones and digital ...


4
Arm Machine Learning Processor

The Arm Machine Learning processor is an optimized, ground-up design for machine learning acceleration, targeting mobile and adjacent markets. The solution consists of state-of-the-art optimized fi...


5
Cortex-M55

The Arm Cortex-M55 processor is Arm's most AI-capable Cortex-M processor and the first to feature Arm Helium vector processing technology. It brings energy-efficient digital signal processing (...


6
Cortex-M85
A New Milestone for High-Performance Microcontrollers

Arm Cortex-M85 is the highest-performing Cortex-M processor with Arm Helium technology and provides the natural upgrade path for Cortex-M based a...

7
Ethos-N78

Highly Scalable and Efficient Second-Generation ML Inference Processor

Build premium AI solutions at low cost in multiple market segments

Arm's second-generation, highly scalable an...


8
Ethos-U55 Embedded ML Inference for Cortex-M Systems

Unlock the Benefits of AI with this Best-in-Class Solution

The combination of world-class hardware IP, easy-to-use tools, open-source software, and a leading ecosystem means the Ethos-U55 mi...
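
A minimal sketch of the usual first step for a Cortex-M plus microNPU target such as the Ethos-U55: full-integer (int8) post-training quantization with the TensorFlow Lite converter. The saved-model path, input shape, and calibration generator below are placeholders, and the resulting .tflite file would still need to be compiled by Arm's toolchain (typically the Vela compiler) to map operators onto the NPU; this is a generic illustration, not Arm's documented flow.

    # Minimal sketch (assumptions noted above): int8 post-training quantization
    # with the TensorFlow Lite converter; paths and calibration data are placeholders.
    import numpy as np
    import tensorflow as tf

    def representative_data_gen():
        # Placeholder calibration samples; replace with real input data.
        for _ in range(100):
            yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model_int8.tflite", "wb") as f:
        f.write(converter.convert())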


9
Ethos-U65

AI Innovation for Edge and Endpoint Devices

Build low cost, highly efficient AI solutions in a wide range of embedded devices with Arm's latest addition to the Ethos-U microNPU family. Th...


10
EV74 processor IP for AI vision applications with 4 vector processing units
The DesignWare® ARC® EV71, EV72, and EV74 Embedded Vision Processor IP provides high performance, low power, area efficient solutions for a standalone computer vision and/or AI algorithms engine or as...

11
EV7x Vision Processors
The Synopsys EV7x Vision Processors heterogeneous architecture integrates vector DSP, vector FPU, and a neural network accelerator to provide a scalable solution for a wide range of current and emerg...

12
Neo NPU - Scalable and Power-Efficient Neural Processing Units
Highly scalable performance for classic and generative on-device and edge AI solutions

The Cadence Neo NPUs offer energy-efficient hardware-based AI engines that can be paired with any host processor...

13
CEVA NeuPro-M - NPU IP family for generative and classic AI with the highest power efficiency, scalable and future-proof
NeuPro-M redefines high-performance AI (Artificial Intelligence) processing for smart edge devices and edge compute with heterogeneous coprocessing, targeting generative and classic AI inferencing wor...

14
CEVA NeuPro-S - Edge AI Processor Architecture for Imaging & Computer Vision

NeuPro-S™ is a low power AI processor architecture for on-device deep learning inferencing, imaging and computer vision workloads.

While NeuPro-S provides a self-contained and speciali...


15
CMNP - Chips&Media Neural Processor
Chips&Media s CMNP, the new Neural Processing Unit (NPU) product, competes for high-performance neural processing-based IP for edge devices. CMNP provides exceptionally enhanced image quality based on...

16
Edge AI/ML accelerator (NPU)
TinyRaptor is a fully-programmable AI accelerator designed to execute deep neural networks (DNN) in an energy-efficient way. TinyRaptor reduces the inference time and power consumption needed to run ...

17
NeuPro Family of AI Processors

Dedicated low-power AI processor family for deep learning at the edge. It provides self-contained, specialized AI processors, scaling in performance for a broad range of end markets including IoT, ...


18
Neural Network Processor IP Series for AI Vision and AI Voice
VeriSilicon’s Vivante VIP9000 processor family offers programmable, scalable and extendable solutions for markets that demand real time and low power AI devices. VIP9000 Series’ patented Neural Networ...

19
4-/8-bit mixed-precision NPU IP
OPENEDGES, the world's only total memory system and AI platform IP solution company, releases ENLIGHT, the first commercial mixed-precision (4-/8-bit) computation NPU IP. When ENLIGHT is used with ot...
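
The description above mentions mixed 4-/8-bit computation. As a generic illustration only (not OPENEDGES' actual quantization scheme, which is not detailed here), the sketch below compares symmetric linear quantization of the same weights at 8-bit and 4-bit, showing the precision cost that mixed-precision NPUs trade against smaller weights and lower memory bandwidth.

    # Generic symmetric linear quantization at 8-bit vs 4-bit (illustrative only).
    import numpy as np

    def fake_quantize(x, bits):
        qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit
        scale = np.max(np.abs(x)) / qmax      # per-tensor scale
        q = np.clip(np.round(x / scale), -qmax, qmax)
        return q * scale                      # dequantized values

    w = np.random.randn(1000).astype(np.float32)
    for bits in (8, 4):
        mse = np.mean((w - fake_quantize(w, bits)) ** 2)
        print(f"{bits}-bit mean squared error: {mse:.6f}")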

20
memBrain

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functi...


21
AI Accelerator: Neural Network-specific Optimized 1 TOPS
The Expedera Origin™ E1 is a family of Artificial Intelligence (AI) processing cores individually optimized for a subset of neural networks commonly used in home appliances, edge nodes, and other smal...
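
For context on the "1 TOPS" figure, here is a back-of-envelope calculation with assumed, illustrative numbers (not Expedera's actual configuration): a TOPS rating typically counts two operations per multiply-accumulate, so peak TOPS = 2 x MAC units x clock rate.

    # Illustrative arithmetic only; the MAC count and clock rate are assumptions.
    macs = 500            # parallel multiply-accumulate units (assumed)
    clock_hz = 1.0e9      # 1 GHz clock (assumed)
    peak_tops = 2 * macs * clock_hz / 1e12
    print(f"Peak throughput: {peak_tops:.1f} TOPS")  # -> 1.0 TOPS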

22
Akida Neuromorphic IP
The Akida Neuromorphic IP offers unsurpassed performance on a performance-per-watt basis. The flexible Neural Processing Cores (NPCs) which form the Akida Neuron Fabric can be configured to perform co...

23
Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer’s model to enable virtually any class of neural net...

24
EFFICIERA Ultra low power AI inference accelerator
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference processing that runs as a circuit on FPGA or ASIC devices. The extremely low bit quantization technology min...

25
General Purpose Neural Processing Unit
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system-on-chip (SoC) developers, Quadric's General Purpose Neural Processing Unit (GPNPU)...

26
High-Performance Edge AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer’s model to enable virtually any class of neural net...

27
nearbAI - IP cores for ultra-low power AI-enabled devices
Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler. It provides immediate visual and spatial feedback based on sensory inputs...

28
NMP-300 - Lowest Power and Cost End Point Accelerator

The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer's model to enable virtually any class of neu...


29
NMP-500 - Performance Efficiency Accelerator

The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer's model to enable virtually any class of neu...


30
NMP-700 - Performance Accelerator for Edge Computing

The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer's model to enable virtually any class of neu...


31
Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture and simple programmer’s model to enable virtually any class of neural net...

32
Ultra Low Power Edge AI Processor
DMP's AI processor IP, ZIA™ DV740, is an ultra-low-power processor IP for deep learning at the edge, specialized for inference processing. ZIA™ DV740 enables inference processing on multiple...

33
v-MP6000UDX - Deep Learning and Vision Processor

The videantis processor is the most power-efficient and highest-performing visual processing architecture that you can license on the market. Whether you need to run deep learning algorithms, video...


34
v-MP6000UDX processor
Deep learning has quickly become a must-have technology to bring new smart sensing and intelligent analysis capabilities to all of our electronics. Whether it's self-driving cars that need to understa...

35
ZIA DV500 Series - Ultra Low Power Consumption Processor IP for Deep Learning
AI inference processor IP that achieves smaller size and ultra-low power consumption by being optimized for object recognition and scene understanding, often used in industrial equipment and automobi...

36
ZIA DV700 Series - Configurable AI inference processor IP
Configurable AI inference processor IP that can be optimized for performance and size, and can process data such as images, video, and sound at the edge, where real-time behavior, safety, privacy p...

37
ZIA ISP- Small-size ISP IP ideal for AI camera systems
Small-size ISP (Image Signal Processing) IP ideal for AI camera systems.

38
Artificial Intelligence Cores

The only constant in the world of AI is change. Higher processing requirements and new algorithms are introduced regularly. In this environment a solution must have high flexibility and programmabi...


39
C860 High-performance 32-bit multi-core processor with AI acceleration engine
C860 utilizes a 12-stage superscalar pipeline, with a standard memory management unit, and can run Linux and other operating systems. It also utilizes a 3-issue and 8-execution deep out-of-order execu...

40
CortiCore - Neural Processing Engine
Roviero has developed a natively graph-computing processor for edge inference. The CortiCore architecture provides the solution via its unique instruction set that dramatically reduces the compiler comple...
